31 research outputs found

    Linear, Deterministic, and Order-Invariant Initialization Methods for the K-Means Clustering Algorithm

    Over the past five decades, k-means has become the clustering algorithm of choice in many application domains, primarily due to its simplicity, time/space efficiency, and invariance to the ordering of the data points. Unfortunately, the algorithm's sensitivity to the initial selection of the cluster centers remains its most serious drawback. Numerous initialization methods have been proposed to address this drawback. Many of these methods, however, have time complexity superlinear in the number of data points, which makes them impractical for large data sets. On the other hand, linear methods are often random and/or sensitive to the ordering of the data points. These methods are generally unreliable in that the quality of their results is unpredictable. Therefore, it is common practice to perform multiple runs of such methods and take the output of the run that produces the best results. Such a practice, however, greatly increases the computational requirements of the otherwise highly efficient k-means algorithm. In this chapter, we investigate the empirical performance of six linear, deterministic (non-random), and order-invariant k-means initialization methods on a large and diverse collection of data sets from the UCI Machine Learning Repository. The results demonstrate that two relatively unknown hierarchical initialization methods due to Su and Dy outperform the remaining four methods with respect to two objective effectiveness criteria. In addition, a recent method due to Erisoglu et al. performs surprisingly poorly. Comment: 21 pages, 2 figures, 5 tables, Partitional Clustering Algorithms (Springer, 2014). arXiv admin note: substantial text overlap with arXiv:1304.7465, arXiv:1209.196
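    To make the contrast concrete, the sketch below pairs standard Lloyd iterations with one simple deterministic, order-invariant initialization in the spirit the abstract describes: a maximin-style seeding that starts from the point nearest the data mean and then repeatedly picks the point farthest from its nearest chosen center. This is an illustrative example, not one of the six methods evaluated in the chapter (the `kmeans` and `maximin_init` names and all parameters are assumptions of this sketch; exact ties in `argmin`/`argmax` would reintroduce a mild order dependence).

    ```python
    import numpy as np

    def kmeans(X, centers, iters=100):
        """Standard Lloyd iterations from the given initial centers."""
        for _ in range(iters):
            # assign each point to its nearest center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # recompute centers; keep the old center if a cluster empties
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(len(centers))])
            if np.allclose(new, centers):
                break
            centers = new
        sse = ((X - centers[labels]) ** 2).sum()  # within-cluster squared error
        return centers, labels, sse

    def maximin_init(X, k):
        """Deterministic, order-invariant (up to exact ties) seeding:
        first center = point nearest the data mean, then greedily add
        the point farthest from its nearest already-chosen center."""
        first = np.linalg.norm(X - X.mean(axis=0), axis=1).argmin()
        centers = [X[first]]
        for _ in range(k - 1):
            C = np.array(centers)
            d = np.min(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
            centers.append(X[d.argmax()])
        return np.array(centers)
    ```

    Because the seeding is deterministic, a single run suffices, which is exactly the practical advantage the abstract attributes to deterministic, order-invariant methods over repeated random restarts.
    
    
    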

    Combining PSO and FCM for Dynamic Fuzzy Clustering Problems

    No full text

    SHADE algorithm dynamic analyzed through complex network

    No full text
    In this preliminary study, the dynamics of the continuous optimization algorithm Success-History based Adaptive Differential Evolution (SHADE) are translated into a Complex Network (CN), and a basic network feature, node degree centrality, is analyzed in order to provide helpful insight into the inner workings of this state-of-the-art Differential Evolution (DE) variant. The analysis is aimed at the correlation between the objective function value of an individual and its participation in the production of better offspring for future generations. In order to test the robustness of this method, it is evaluated on the CEC2015 benchmark in 10 and 30 dimensions. © 2017, Springer International Publishing AG
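    The translation described above can be sketched as follows: run the evolution, and whenever a trial vector improves on its parent, add directed edges from the individuals that participated in producing it to the improved individual; degree centrality of the resulting graph then measures participation in successful offspring. Note this sketch uses a plain DE/rand/1/bin with fixed F and CR, not full SHADE with its success-history parameter adaptation, and all function names and parameters here are assumptions of the sketch.

    ```python
    import numpy as np
    import networkx as nx

    def de_to_network(fobj, dim=5, pop_size=20, gens=50, seed=1):
        """Run a simplified DE/rand/1/bin and record an edge from each
        individual that participated in a successful trial vector toward
        the improved individual. Degree centrality of this graph reflects
        participation in producing better offspring."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5, 5, (pop_size, dim))
        fit = np.array([fobj(x) for x in pop])
        G = nx.DiGraph()
        G.add_nodes_from(range(pop_size))
        F, CR = 0.5, 0.9  # fixed here; SHADE adapts these from a success history
        for _ in range(gens):
            for i in range(pop_size):
                r1, r2, r3 = rng.choice(
                    [j for j in range(pop_size) if j != i], 3, replace=False)
                mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
                cross = rng.random(dim) < CR                 # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                ft = fobj(trial)
                if ft < fit[i]:                              # successful offspring
                    pop[i], fit[i] = trial, ft
                    for p in (r1, r2, r3):                   # participating parents
                        G.add_edge(p, i)
        return G

    sphere = lambda x: float(np.sum(x ** 2))
    G = de_to_network(sphere)
    centrality = nx.degree_centrality(G)
    ```

    On such a network one can then correlate a node's centrality with its objective function value, which is the relationship the study examines.
    
    
    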